False positive result


Reducing Overtreatment of Indeterminate Thyroid Nodules Using a Multimodal Deep Learning Model

Athreya, Shreeram, Melehy, Andrew, Suthahar, Sujit Silas Armstrong, Ivezić, Vedrana, Radhachandran, Ashwath, Sant, Vivek, Moleta, Chace, Zheng, Henry, Patel, Maitraya, Masamed, Rinat, Arnold, Corey W., Speier, William

arXiv.org Artificial Intelligence

Objective: Molecular testing (MT) classifies cytologically indeterminate thyroid nodules as benign or malignant with high sensitivity but low positive predictive value (PPV), relying on molecular profiles alone and ignoring ultrasound (US) imaging and biopsy results. We address this limitation by applying attention-based multiple instance learning (AMIL) to US images. Methods: We retrospectively reviewed 333 patients with indeterminate thyroid nodules at UCLA medical center (259 benign, 74 malignant). A multimodal deep learning AMIL model was developed that combines US images and MT to classify the nodules as benign or malignant and enhance the malignancy risk stratification of MT. Results: The final AMIL model matched the sensitivity of MT (0.946) while significantly improving PPV (0.477 vs. 0.448 for MT alone), indicating fewer false positives at the same high sensitivity. Conclusion: Our approach reduces false positives compared to MT while maintaining the same ability to identify positive cases, potentially reducing unnecessary resections of benign thyroid nodules in patients with indeterminate nodules.
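The abstract does not include the model details, but "attention-based multiple instance learning" conventionally refers to pooling a bag of instance embeddings (here, US image frames of a nodule) with learned attention weights before classification. A minimal sketch of that pooling step, with illustrative dimensions and randomly initialized parameters (`V`, `w` are stand-ins for learned weights, not the paper's):

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_mil_pool(instances, V, w):
    """Attention-based MIL pooling: each instance embedding h_i
    receives a weight a_i = softmax_i(w^T tanh(V h_i)); the bag
    embedding is the weighted sum z = sum_i a_i h_i."""
    scores = np.tanh(instances @ V.T) @ w   # one score per instance
    attn = softmax(scores)                  # weights sum to 1
    z = attn @ instances                    # pooled bag embedding
    return z, attn

rng = np.random.default_rng(0)
h = rng.normal(size=(5, 8))   # a bag of 5 US frames, 8-dim embeddings
V = rng.normal(size=(4, 8))   # hypothetical attention hidden dim of 4
w = rng.normal(size=4)
z, attn = attention_mil_pool(h, V, w)   # z: (8,), attn: (5,)
```

The pooled embedding `z` would then be concatenated with MT features and passed to a classifier head; the attention weights also indicate which frames drove the prediction.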


Experts warn prenatal screening tests can lead to false positive results in some cases

FOX News

Non-invasive prenatal testing (NIPT) of pregnant women to detect the risk of a fetus having rare genetic abnormalities may often be wrong, according to recent reports. These tests, according to multiple health experts, can give false positives, which can create significant angst in expecting parents. Health experts explained to Fox News that NIPT works by taking blood samples from the pregnant mother and then analyzing fragments of free-floating cell-free DNA (cfDNA).


Every Single Way You Can Tell Trump World Is Lying About Its Latest COVID Scandal

Slate

Donald Trump and his former White House chief of staff Mark Meadows are peddling a new story about the ex-president's coronavirus infection. Their first story was that Trump didn't test positive until Oct. 1, 2020, two days after he debated Joe Biden. Then Meadows admitted in his new book, The Chief's Chief, that Trump actually tested positive on Sept. 26, three days before the debate. That admission was problematic, since Trump never informed Biden--or hundreds of other unwitting people who interacted closely with the maskless president in the intervening five days--about the test result. So now Trump and Meadows have concocted yet another story: The Sept. 26 result was a "false positive."


FDA: Antigen tests for COVID-19 are rapid but can lead to false positives

Boston Herald

The U.S. Food and Drug Administration is alerting clinical laboratory staff and health care providers that false positive results can occur with antigen tests for the virus that causes COVID-19. In a letter to stakeholders, the FDA said Tuesday that while antigen tests can be used for the rapid detection of SARS-CoV-2, false positive results can occur, especially if users don't follow the instructions. "The FDA is aware of reports of false positive results associated with antigen tests used in nursing homes and other settings and continues to monitor and evaluate these reports and other available information about device safety and performance," the letter said. A Boston-area infectious disease expert said the antigen tests are good for large scale screening, when used properly, but must be followed up with more accurate testing. "If you are testing a population at low risk, it's fine to do these tests for screening," said Dr. Daniel Kuritzkes, chief of the Division of Infectious Diseases at Brigham and Women's Hospital.


Study finds Google system could improve breast cancer detection - Reuters

#artificialintelligence

CHICAGO (Reuters) - A Google artificial intelligence system proved as good as expert radiologists at detecting which women had breast cancer based on screening mammograms and showed promise at reducing errors, researchers in the United States and Britain reported. The study, published in the journal Nature on Wednesday, is the latest to show that artificial intelligence (AI) has the potential to improve the accuracy of screening for breast cancer, which affects one in eight women globally. Radiologists miss about 20% of breast cancers in mammograms, the American Cancer Society says, and half of all women who get the screenings over a 10-year period have a false positive result. The findings of the study, developed with Alphabet Inc's (GOOGL.O) DeepMind AI unit, which merged with Google Health in September, represent a major advance in the potential for the early detection of breast cancer, Mozziyar Etemadi, one of its co-authors from Northwestern Medicine in Chicago, said. The team, which included researchers at Imperial College London and Britain's National Health Service, trained the system to identify breast cancers on tens of thousands of mammograms.


AI system is better than human doctors at predicting breast cancer

New Scientist

An artificial intelligence system is better at predicting breast cancer than radiologists, according to a UK-US study led by Google Health. The team behind the technology hopes it can be widely deployed to improve cancer care. Catching cancer early improves the chances of treatment succeeding. That is why many countries routinely screen women for signs of breast cancer using an X-ray scan called a mammogram. In the UK, women aged between 50 and 71 are invited for a scan every three years.


Google system could improve breast cancer detection - study

#artificialintelligence

In the United States, only one radiologist reads the results and the tests are done every one to two years. In Britain, the tests are done every three years, and each is read by two radiologists. When they disagree, a third is consulted.

'SUBTLE CUES'

In a separate test, the group pitted the AI system against six radiologists and found it outperformed them at accurately detecting breast cancers. Connie Lehman, chief of the breast imaging department at Harvard's Massachusetts General Hospital, said the results are in line with findings from several groups using AI to improve cancer detection in mammograms, including her own work. The notion of using computers to improve cancer diagnostics is decades old, and computer-aided detection (CAD) systems are commonplace in mammography clinics, yet CAD programs have not improved performance in clinical practice. The issue, Lehman said, is that current CAD programs were trained to identify things human radiologists can see, whereas with AI, computers learn to spot cancers based on the actual results of thousands of mammograms. This has the potential to "exceed human capacity to identify subtle cues that the human eye and brain aren't able to perceive," Lehman added. Although computers have not been "super helpful" so far, "what we've shown at least in tens of thousands of mammograms is the tool can actually make a very well-informed decision," Etemadi said. The study has some limitations. Most of the tests were done using the same type of imaging equipment, and the U.S. group contained a lot of patients with confirmed breast cancers. Crucially, the team has yet to show the tool improves patient care, said Dr. Lisa Watanabe, chief medical officer of CureMetrix, whose AI mammogram program won U.S. approval last year.


Study finds Google system could improve breast cancer detection

The Japan Times

CHICAGO – A Google artificial intelligence system proved as good as expert radiologists at predicting which women would develop breast cancer based on screening mammograms and showed promise at reducing errors, researchers in the United States and Britain reported. The study, published in the journal Nature on Wednesday, is the latest to show that artificial intelligence (AI) has the potential to improve the accuracy of screening for breast cancer, which affects one in eight women globally. Radiologists miss about 20 percent of breast cancers in mammograms, the American Cancer Society says, and half of all women who get the screenings over a 10-year period have a false positive result. The findings of the study, developed with Alphabet's DeepMind AI unit, which merged with Google Health in September, represent a major advance in the potential for the early detection of breast cancer, said Mozziyar Etemadi, one of its co-authors from Northwestern Medicine in Chicago. The team, which included researchers at Imperial College London and Britain's National Health Service, trained the system to identify breast cancers on tens of thousands of mammograms.


False Positives Are a True Negative: Using Machine Learning to Improve Accuracy

#artificialintelligence

Machine learning has grown into one of the most popular and powerful tools in the quest to secure systems. Some approaches to machine learning have yielded overly aggressive models that demonstrate remarkable predictive accuracy yet produce false positives. False positives create negative user experiences that prevent new protections from being deployed. IT personnel also find these false alarms disruptive when they are working to detect and eliminate malware. The Ponemon Institute recently reported that over 20 percent of endpoint security investigation spending was wasted on these false alarms.
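The trade-off the article describes — an "aggressive" model that catches everything but drowns analysts in false alarms — usually comes down to where the decision threshold sits on the classifier's scores. A toy illustration with hypothetical detector scores (not from the article):

```python
import numpy as np

# Hypothetical detector scores: higher means "more likely malicious".
# Labels: 1 = malware, 0 = benign. Purely illustrative data.
scores = np.array([0.95, 0.90, 0.80, 0.70, 0.65, 0.60, 0.40, 0.30, 0.20, 0.10])
labels = np.array([1,    1,    1,    0,    1,    0,    0,    0,    0,    0])

def fp_and_recall(threshold):
    """Count benign files flagged (false positives) and the fraction
    of malware caught (recall) at a given score threshold."""
    flagged = scores >= threshold
    fp = int(np.sum(flagged & (labels == 0)))
    recall = np.sum(flagged & (labels == 1)) / labels.sum()
    return fp, recall

# An aggressive threshold catches all malware but raises two false alarms;
# a stricter one eliminates the false alarms at the cost of some recall.
print(fp_and_recall(0.5))   # → (2, 1.0)
print(fp_and_recall(0.75))  # → (0, 0.75)
```

In practice the threshold is tuned on a validation set so that the alert volume stays within what responders can actually investigate.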


Why do we fall for false positives even though they're common?

New Scientist

Last month, the drinking water in a Colorado town was declared unsafe, because it had been contaminated by an ingredient from cannabis. It took two days to discover that this was not the case – a water test had turned up a false positive result. In fact, false positives are widespread in our everyday lives, and we seem to have an innate inability to get to grips with them. The fuss in Hugo, Colorado – a state where cannabis use is now legal – began when a county employee administering a test for drug use decided to use the same kind of test on tap water, rather than saliva, in an attempt to rule out a false positive. When the water tested positive too, it was assumed the test kit was a dud.
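The intuition failure the article describes is base-rate neglect: when the thing being tested for is rare, even an accurate test produces mostly false positives. Bayes' theorem makes this concrete (the numbers below are hypothetical, not the Colorado test's):

```python
def ppv(sensitivity, specificity, prevalence):
    """P(condition | positive test) via Bayes' theorem:
    true positives divided by all positives."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# A test that is 99% sensitive and 95% specific, applied to a
# condition affecting 1 in 1,000 cases: under 2% of positives are real.
print(round(ppv(0.99, 0.95, 0.001), 3))  # → 0.019

# The same test on a 50% prevalence population is trustworthy.
print(round(ppv(0.99, 0.95, 0.5), 3))    # → 0.952
```

This is why a surprising positive result — cannabis in tap water, say — warrants retesting before anyone acts on it.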